Session F-4

Activity Sensing

Time
May 18 Thu, 8:30 AM — 10:00 AM EDT
Location
Babbio 220

BreathSign: Transparent and Continuous In-ear Authentication Using Bone-conducted Breathing Biometrics

Feiyu Han, Panlong Yang and Shaojie Yan (University of Science and Technology of China, China); Haohua Du (Beihang University, China); Yuanhao Feng (University of Science and Technology of China, China)

As one of the most natural physiological activities, breathing provides an effective and ubiquitous approach for continuous authentication. Inspired by this, this paper presents BreathSign, which reveals a novel biometric characteristic using bone-conducted human breathing sounds and provides an anti-spoofing, transparent authentication mechanism based on the inward-facing microphones of commercial earphones. To explore breathing differences among persons, we first analyze how breathing sounds propagate through the body, and then derive unique body physics-level features from breathing-induced body sounds. Furthermore, to eliminate the impact of behavioral biometrics, we design a triple network model to reconstruct breathing behavior-independent features. Extensive experiments with 20 subjects over a one-month period have been conducted to evaluate the accuracy, robustness, and vulnerability of BreathSign. The results show that our system authenticates users with an average accuracy of 95.17% from only one breathing cycle, and effectively defends against various spoofing attacks with an average spoofing attack detection rate of 98.25%. Compared with other continuous authentication solutions, BreathSign extracts hard-to-forge biometrics from effortless human breathing and can be easily implemented on commercial earphones with high usability and enhanced security.
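The abstract mentions a "triple network model" that learns breathing behavior-independent features. A minimal sketch of that general idea, using a triplet-loss embedding network in PyTorch, is shown below; the encoder architecture, input length, embedding size, and margin are illustrative assumptions and are not the authors' implementation.

import torch
import torch.nn as nn

class BreathEncoder(nn.Module):
    """Maps a fixed-length bone-conducted breathing segment to an embedding (hypothetical)."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16), nn.Flatten(),
            nn.Linear(32 * 16, emb_dim),
        )

    def forward(self, x):  # x: (batch, 1, n_samples)
        return nn.functional.normalize(self.net(x), dim=-1)

encoder = BreathEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.5)

# Anchor/positive: same user under different breathing behaviors; negative: a different user.
anchor   = torch.randn(8, 1, 1024)
positive = torch.randn(8, 1, 1024)
negative = torch.randn(8, 1, 1024)
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()

Training with such triplets pushes embeddings of the same user together regardless of how they breathe, which is one common way to obtain behavior-independent identity features; at authentication time a new breathing cycle would be compared against an enrolled embedding.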
Speaker Feiyu Han (University of Science and Technology of China)

Feiyu Han is a Ph.D. student in the LINKE lab at USTC, China. His research focuses on wearable computing and authentication. He received his B.S. degree from the School of Computer Science and Engineering, Nanjing University of Science and Technology, in 2019.



FacER: Contrastive Attention based Expression Recognition via Smartphone Earpiece Speaker

Guangjing Wang, Qiben Yan, Shane Patrarungrong, Juexing Wang and Huacheng Zeng (Michigan State University, USA)

Facial expression recognition has been applied to reveal users' emotional status when they interact with digital content. Previous studies have considered using cameras or wearable sensors for expression recognition. However, these approaches raise considerable privacy concerns or impose extra device burdens. Moreover, the recognition performance of camera-based methods deteriorates when users are wearing masks. In this paper, we propose FacER, an active acoustic facial expression recognition system. As a software solution on a smartphone, FacER avoids the extra cost of external microphone arrays. FacER extracts facial expression features by modeling the echoes of emitted near-ultrasound signals between the earpiece speaker and the 3D facial contour. Besides isolating a range of background noises, FacER is designed to identify different expressions from various users with a limited set of training data. To achieve this, we propose a contrastive external attention-based model to learn consistent expression feature representations across different users. Extensive experiments with 20 volunteers, with or without masks, show that FacER can recognize 6 common facial expressions with more than 85% accuracy, outperforming the state-of-the-art acoustic sensing approach by 10% in various real-life scenarios. FacER provides a more robust solution for recognizing users' expressions in a convenient and usable manner.
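The abstract's "external attention" component refers to attention computed against a small learnable memory shared across all samples, which helps keep feature representations consistent across users. A minimal sketch of such a layer in PyTorch follows; the token count, feature dimension, and memory size are illustrative assumptions, and FacER's actual model (including its contrastive training) is more elaborate.

import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """External attention over echo-feature tokens: queries attend to a small
    learnable memory shared across samples, rather than to the sample itself."""
    def __init__(self, dim=64, mem_size=32):
        super().__init__()
        self.mk = nn.Linear(dim, mem_size, bias=False)  # memory keys
        self.mv = nn.Linear(mem_size, dim, bias=False)  # memory values

    def forward(self, x):  # x: (batch, tokens, dim)
        attn = torch.softmax(self.mk(x), dim=1)                  # normalize over tokens
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)    # double normalization
        return self.mv(attn)                                     # same shape as x

tokens = torch.randn(4, 50, 64)   # e.g., 50 echo frames per expression sample (hypothetical)
out = ExternalAttention()(tokens) # -> (4, 50, 64)

Because the memory is shared by every input, features from different users are projected against the same learned basis, which is the property the abstract relies on for cross-user consistency.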
Speaker Guangjing Wang (Michigan State University)

Guangjing Wang is a Ph.D. candidate in Computer Science and Engineering at Michigan State University. His research mainly focuses on mobile sensing, security, and privacy. More information can be found at https://guangjing.wang/


Wider is Better? Contact-free Vibration Sensing via Different COTS-RF Technologies

Zhe Chen (China-Singapore International Joint Research Institute, China); Tianyue Zheng (Nanyang Technological University, Singapore); Chao Cai (Huazhong University of Science and Technology, China); Yue Gao (University of Surrey, United Kingdom (Great Britain)); Pengfei Hu (Shandong University, China); Jun Luo (Nanyang Technological University, Singapore)

Vibration sensing is crucial to human life and work, as vibrations indicate the status of their sources (e.g., heartbeats reflect human health conditions). Given the inconvenience of contact sensing, both academia and industry have been intensively exploring contact-free vibration sensing, with several major developments leveraging radio-frequency (RF) technologies made very recently. However, a measurement study systematically comparing these options is still missing. In this paper, we evaluate five representative commercial off-the-shelf (COTS) RF technologies with different carrier frequencies, bandwidths, and waveform designs. We first unify the sensing data format and processing pipeline, and also propose a novel metric, v-SNR, to quantify sensing quality. Our extensive evaluations start from controlled experiments for benchmarking, followed by investigations of two real-world applications: machinery vibration measurement and vital sign monitoring. Our comprehensive study reveals that Wi-Fi performs the worst among all five technologies, while a lesser-known UWB-based technology achieves the best overall performance; the others have respective pros and cons in different scenarios.
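The paper defines its own sensing-quality metric, v-SNR; the exact definition is in the paper. As a rough illustration of what quantifying the quality of a recovered vibration waveform can look like, the sketch below computes a generic band-power SNR around a known vibration frequency. The function name, bandwidth, and test signal are assumptions for demonstration only.

import numpy as np

def vibration_snr_db(signal, fs, f_vib, band=1.0):
    """Illustrative SNR: power in a narrow band around the known vibration
    frequency versus power everywhere else in the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = np.abs(freqs - f_vib) <= band
    p_sig = spectrum[in_band].sum()
    p_noise = spectrum[~in_band].sum() + 1e-12
    return 10 * np.log10(p_sig / p_noise)

fs = 1000.0                                   # 1 kHz sampling of the recovered waveform
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(len(t))  # 50 Hz vibration + noise
print(f"{vibration_snr_db(x, fs, f_vib=50):.1f} dB")

Such a metric makes the comparison across RF technologies possible once all of them are mapped to the same unified waveform format, which is what the paper's shared processing pipeline provides.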
Speaker Zhe Chen (Fudan University)

Dr. Zhe Chen is the Co-Founder of AIWiSe Ltd. Inc. He obtained his Ph.D. degree in Computer Science from Fudan University, China, and received the 2019 ACM SIGCOMM China Doctoral Dissertation Award. Before joining AIWiSe, he worked as a research fellow at NTU for several years; his research achievements, along with his efforts in launching products based on them, earned him the 2021 ACM SIGMOBILE China Rising Star Award. His current research interests include wireless networking, deep learning, mobile and pervasive computing, and embedded systems.


WakeUp: Fine-Grained Fatigue Detection Based on Multi-Information Fusion on Smart Speakers

Zhiyuan Zhao, Fan Li and Yadong Xie (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

With the development of society and increasing life pressure, both the number of people engaged in mental work and their working hours have increased significantly, leaving more and more people in a state of fatigue. Fatigue not only reduces work efficiency but also causes health- and safety-related problems. Existing fatigue detection systems either suffer from shortcomings in diverse scenarios or are limited by proprietary equipment, making them difficult to apply in real life. Motivated by this, we propose a multi-information fatigue detection system named WakeUp based on commercial smart speakers, which is the first to fuse physiological and behavioral information for fine-grained fatigue detection in a non-contact manner. We carefully design a method to simultaneously extract users' physiological and behavioral information based on the MobileViT network and the VMD decomposition algorithm, respectively. Then, we design a multi-information fusion method based on the statistical features of these two kinds of information. In addition, we adopt an SVM classifier to achieve fine-grained fatigue level classification. Extensive experiments with 20 volunteers show that WakeUp can detect fatigue with an accuracy of 97.28%. Meanwhile, WakeUp maintains stability and robustness under different experimental settings.
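The fusion step described in the abstract combines statistical features of the two information streams and feeds them to an SVM. A minimal sketch of that last stage with scikit-learn is shown below; the feature set, stream lengths, three fatigue levels, and random stand-in data are assumptions, and the upstream MobileViT and VMD extraction is not reproduced here.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def stat_features(x):
    """Simple statistical summary of one information stream (illustrative)."""
    return np.array([x.mean(), x.std(), x.min(), x.max(), np.median(x)])

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    physio = rng.normal(size=256)   # stand-in for the extracted physiological signal
    behave = rng.normal(size=256)   # stand-in for the extracted behavioral signal
    X.append(np.concatenate([stat_features(physio), stat_features(behave)]))
    y.append(rng.integers(0, 3))    # three hypothetical fatigue levels
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(np.array(X), np.array(y))
print(clf.predict(np.array(X[:3])))

Concatenating per-stream statistics before classification is one straightforward way to let a single SVM weigh physiological and behavioral evidence jointly, which matches the fusion-then-classify structure the abstract outlines.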
Speaker Zhiyuan Zhao (Beijing Institute of Technology)

Zhiyuan Zhao received the M.E. degree in computational mathematics from Hefei University of Technology in 2020 and the B.E. degree in applied mathematics from Henan Normal University in 2017. He is currently a Ph.D. student in the School of Computer Science, Beijing Institute of Technology, Beijing, China. His research interests include mobile computing, mobile health, human-computer interaction, and deep learning.


Session Chair

Zhichao Cao

